Shared memory : Wikipedia (English edition)
Shared memory

In computer science, '''shared memory''' is memory that may be simultaneously accessed by multiple programs with the intent to provide communication among them or to avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors.
Using memory for communication inside a single program, for example among its multiple threads, is also referred to as shared memory.
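As an illustration of inter-process communication through shared memory, the following minimal sketch uses the POSIX shared-memory API (shm_open, ftruncate, and mmap): a parent and a forked child map the same object and pass a message through it. The object name /shm_demo_example, the region size, and the message text are arbitrary choices for this example, not anything specified by the article.
<syntaxhighlight lang="c">
/* Minimal POSIX shared-memory sketch: a parent and a forked child
 * exchange data through one mapped region.
 * On Linux, compile with: cc shm_demo.c -lrt */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHM_NAME "/shm_demo_example"   /* arbitrary object name for this sketch */
#define SHM_SIZE 4096

int main(void) {
    /* Create (or open) a shared-memory object and give it a size. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd == -1 || ftruncate(fd, SHM_SIZE) == -1) { perror("shm"); return 1; }

    /* Map the object; MAP_SHARED makes writes visible to the other process. */
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                      /* child: write into the shared region */
        strcpy(region, "hello from the child process");
        return 0;
    }
    wait(NULL);                             /* parent: wait for the child, then read */
    printf("parent read: %s\n", region);

    munmap(region, SHM_SIZE);
    shm_unlink(SHM_NAME);                   /* remove the object when done */
    return 0;
}
</syntaxhighlight>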
==In hardware==

In computer hardware, ''shared memory'' refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system.
Shared memory systems may use:
* uniform memory access (UMA): all the processors share the physical memory uniformly;
* non-uniform memory access (NUMA): memory access time depends on the memory location relative to a processor (a brief allocation sketch follows this list);
* cache-only memory architecture (COMA): the local memories for the processors at each node are used as caches instead of as actual main memory.
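As a small, hedged illustration of the NUMA case above: on Linux, the libnuma library can place an allocation on a chosen node, which is one way the distance between a processor and a memory location becomes visible to software. The buffer size and node number below are arbitrary example values.
<syntaxhighlight lang="c">
/* Sketch: node-local allocation with Linux libnuma (compile with: cc numa_demo.c -lnuma).
 * Illustrates the NUMA idea that memory placement relative to a CPU matters. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() == -1) {          /* kernel or library without NUMA support */
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }
    size_t size = 64 * 1024 * 1024;        /* arbitrary 64 MiB buffer for the sketch */
    int node = 0;                          /* allocate on node 0 purely for illustration */

    void *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) { fprintf(stderr, "allocation failed\n"); return 1; }

    memset(buf, 0, size);                  /* touch the pages so they are actually placed */
    printf("allocated %zu bytes on NUMA node %d (nodes available: %d)\n",
           size, node, numa_max_node() + 1);

    numa_free(buf, size);
    return 0;
}
</syntaxhighlight>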
A shared memory system is relatively easy to program since all processors share a single view of data and communication between processors can be as fast as memory accesses to the same location (a minimal threaded sketch appears below). The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:
* access time degradation: when several processors try to access the same memory location, contention results. Shared memory computers cannot scale very well; most of them have ten or fewer processors;
* lack of data coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be propagated to the other processors; otherwise, the different processors will be working with incoherent data. Cache coherence protocols that manage this can, when they work well, provide extremely high-performance access to shared information between multiple processors. On the other hand, they can sometimes become overloaded and become a bottleneck to performance.
Interconnect technologies such as crossbar switches, Omega networks, HyperTransport or a front-side bus can be used to mitigate these bottleneck effects.
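To make the single-view-of-data point concrete at the software level, here is a minimal sketch: several POSIX threads update one counter that lives in ordinary process memory, and a mutex provides the coordination that keeps their updates consistent. The thread and iteration counts are arbitrary example values.
<syntaxhighlight lang="c">
/* Sketch: threads sharing one view of data in ordinary process memory.
 * A mutex serializes updates so every thread observes a consistent value.
 * Compile with: cc threads_demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4            /* arbitrary values for the sketch */
#define NITERS   100000

static long counter = 0;                       /* shared by every thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);             /* serialize the read-modify-write */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    /* With the mutex, the total is always NTHREADS * NITERS. */
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * NITERS);
    return 0;
}
</syntaxhighlight>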
In the case of a heterogeneous system architecture (a processor architecture that integrates different types of processors, such as CPUs and GPUs, with shared memory), the memory management unit (MMU) of the CPU and the input–output memory management unit (IOMMU) of the GPU have to share certain characteristics, such as a common address space.
The alternatives to shared memory are distributed memory and distributed shared memory, each having a similar set of issues.

Excerpt source: Wikipedia, the free encyclopedia
Read the full article on "Shared memory" at Wikipedia


